TraitSpaces: Towards Interpretable Visual Creativity for Human-AI Co-Creation
We introduce a psychologically grounded and artist-informed framework for modeling visual creativity across four domains: Inner, Outer, Imaginative, and Moral Worlds. Drawing on interviews with practicing artists and theories from psychology, we define 12 traits that capture affective, symbolic, cultural, and ethical dimensions of creativity. Using 20k artworks from the SemArt dataset, we annotate images with GPT-4.1 using detailed, theory-aligned prompts, and evaluate the learnability of these traits from CLIP image embeddings. Traits such as Environmental Dialogicity and Redemptive Arc are predicted with high reliability ($R^2 \approx 0.64 - 0.68$), while others, such as Memory Imprint, remain challenging, highlighting the limits of purely visual encoding. Beyond technical metrics, we visualize a "creativity trait-space" and illustrate how it can support interpretable, trait-aware co-creation: for example, sliding along a Redemptive Arc axis to explore works of adversity and renewal. By linking cultural-aesthetic insights with computational modeling, our work aims not to reduce creativity to numbers, but to offer a shared language and interpretable tools for artists, researchers, and AI systems to collaborate meaningfully.
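The trait-learnability evaluation described in the abstract can be sketched as a linear probe: regress trait scores on frozen CLIP image embeddings and report held-out $R^2$. The sketch below uses synthetic stand-in data rather than SemArt annotations, and ridge regression is one plausible probe choice, not necessarily the authors' exact method.

```python
# Hedged sketch: probing whether a continuous trait score (e.g. "Redemptive Arc")
# is linearly decodable from image embeddings, scored by held-out R^2.
# All data here is synthetic; in the paper, X would hold CLIP embeddings of
# SemArt artworks and y the GPT-annotated trait scores.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n, d = 2000, 512                            # images, embedding dimension
X = rng.normal(size=(n, d))                 # stand-in for CLIP image embeddings
w = rng.normal(size=d)                      # hidden linear "trait" direction
y = X @ w + rng.normal(scale=5.0, size=n)   # synthetic noisy trait scores

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
probe = Ridge(alpha=1.0).fit(X_tr, y_tr)
r2 = r2_score(y_te, probe.predict(X_te))
print(f"held-out R^2: {r2:.2f}")
```

A low held-out $R^2$ under such a probe is what the abstract calls a limit of purely visual encoding: the trait signal is simply not linearly present in the image embedding.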
A Survey on Large Language Model Hallucination via a Creativity Perspective
Jiang, Xuhui, Tian, Yuxing, Hua, Fengrui, Xu, Chengjin, Wang, Yuanzhuo, Guo, Jian
Hallucinations in large language models (LLMs) are typically seen as limitations. However, could they also be a source of creativity? This survey explores that possibility, suggesting that hallucinations may contribute to LLM applications by fostering creativity. It begins with a review of the taxonomy of hallucinations and their negative impact on LLM reliability in critical applications. Then, through historical examples and recent relevant theories, it explores the potential creative benefits of hallucinations in LLMs. To elucidate the value and evaluation criteria of this connection, we delve into the definitions and assessment methods of creativity. Following the framework of divergent and convergent thinking phases, the survey systematically reviews the literature on transforming and harnessing hallucinations for creativity in LLMs. Finally, it discusses future research directions, emphasizing the need to further explore and refine the application of hallucinations in creative processes within LLMs.
Automatic Assessment of Divergent Thinking in Chinese Language with TransDis: A Transformer-Based Language Model Approach
Yang, Tianchen, Zhang, Qifan, Sun, Zhaoyang, Hou, Yubo
Language models have become increasingly popular for automatic creativity assessment, generating semantic distances to objectively measure the quality of creative ideas. However, there is currently no automatic assessment system for evaluating creative ideas in the Chinese language. To address this gap, we developed TransDis, a scoring system using transformer-based language models, capable of providing valid originality (quality) and flexibility (variety) scores for Alternative Uses Task (AUT) responses in Chinese. Study 1 demonstrated that the latent model-rated originality factor, composed of three transformer-based models, strongly predicted human originality ratings, and that model-rated flexibility strongly correlated with human flexibility ratings as well. Criterion validity analyses indicated that model-rated originality and flexibility positively correlated with other creativity measures, demonstrating validity similar to that of human ratings. Studies 2 and 3 showed that TransDis effectively distinguished participants instructed to provide creative vs. common uses (Study 2) and participants instructed to generate ideas in a flexible vs. persistent way (Study 3). Our findings suggest that TransDis can be a reliable and low-cost tool for measuring idea originality and flexibility in the Chinese language, potentially paving the way for automatic creativity assessment in other languages. We offer an open platform to compute originality and flexibility for AUT responses in Chinese and over 50 other languages (https://osf.io/59jv2/).
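The semantic-distance scoring behind systems like TransDis can be illustrated in miniature: originality is commonly operationalized as one minus the cosine similarity between the embedding of the prompt object and the embedding of a response. This is not the TransDis pipeline itself; the prompt and responses below are invented, and TF-IDF vectors stand in for the paper's transformer models so the example runs without model downloads.

```python
# Hedged sketch of semantic-distance originality scoring for AUT responses:
# a response whose wording stays close to the prompt object scores a smaller
# distance (less original) than one that departs from it entirely.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

prompt = "brick"  # hypothetical AUT prompt object
responses = ["build a brick wall", "grind it into pigment for painting"]

vec = TfidfVectorizer().fit([prompt] + responses)
p = vec.transform([prompt])
dists = [1.0 - cosine_similarity(p, vec.transform([r]))[0, 0] for r in responses]
for r, d in zip(responses, dists):
    print(f"{r!r}: semantic distance = {d:.2f}")
```

With transformer embeddings the same distance also captures paraphrase and conceptual similarity, which bag-of-words vectors cannot; flexibility (variety) can then be estimated from pairwise distances among a participant's responses.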
Modelling Creativity: Identifying Key Components through a Corpus-Based Approach
As Torrance observes: '[c]reativity defies precise definition... even if we had a precise conception of creativity, I am certain we would have difficulty putting it into words' [15, p. 43]. Many other authors have expressed similar difficulties [7, 10, 16]. In their review of research into human creativity, Hennessey and Amabile ask a significant follow-on question: 'Even if this mysterious phenomenon can be isolated, quantified, and dissected, why bother? Wouldn't it make more sense to revel in the mystery and wonder of it all?' [11, p. 570] They offer two answers to this question, both of which are identified as desirable: to gain a deeper understanding of creativity and to learn how to boost people's creativity. Creativity can and should be studied and measured scientifically, but the lack of a commonly agreed understanding causes problems for measurement [10]. Plucker et al. make recommendations about best practice based on their own survey of the creativity literature: 'we argue that creativity researchers must (a) explicitly define what they mean by creativity, (b) avoid using scores of creativity measures as the sole definition of creativity (e.g., creativity is what creativity tests measure and creativity tests measure creativity, therefore we will use a score on a creativity test as our outcome variable), (c) discuss how the definition they are using is similar to or different from other definitions, and (d) address the question of creativity for whom and in what context.' [9, p. 92] In short, we need to specify and justify the standards that we use to judge creativity. A more objective and well-articulated account of how creativity is manifested enables researchers to make a worthwhile contribution [8-10]. In particular, our research focuses on the processes and concepts relevant to creativity that are 'sufficiently important to warrant study' [17, p. 15], based on an accumulation of the body of work on creativity to date [17].
Toward an Intelligent Agent for Fraud Detection — The CFE Agent
Johnson, Joe (Rensselaer Polytechnic Institute)
One of the primary realms into which artificial intelligence research has ventured is that of psychometric tests. Whether performance on tests should serve as the metric by which we determine that a machine is intelligent has been debated since Alan Turing proposed the Turing Test. This is an idea that may either solidify or challenge one's sense of what artificial intelligence really is, depending on the reader's predisposition. As will be discussed in this paper, there is a history of efforts to create agents that perform well on tests in the spirit of an interpretation of artificial intelligence called "Psychometric AI". The focus of this paper, however, is to describe a machine agent developed in this tradition, hereafter called the CFE Agent. The CFE Exam is a gateway to certification by the Association of Certified Fraud Examiners (ACFE), a widely recognized professional credential within the fraud examiner profession. The CFE Agent attempts to emulate the successful performance of a human test taker, using what would appear to be simplistic natural language processing approaches to answer test questions. But it is also hoped that the reader will be convinced that the same core technologies can be successfully applied within the larger domain of fraud detection. Further work will also be briefly discussed, in which we attempt to take these techniques to a deeper level, gaining a better sense of the knowledge the agent is using and how that knowledge is applied to formulate answers.